The prevalence of AI use on college campuses, particularly at "elite" universities, is a cancer on our culture that threatens to turn a generation of promising young Americans into a class of drooling morons, and it will grotesquely disfigure, if not destroy, the university as an institute in every way that it is imagined — as a sacrosanct humanist project, as a moral training ground, or even as a vulgar sweatshop for job training.
And, it gets much better. This is a youngling, not some old fuddy-duddy of the Old Republic
https://arstechnica.com/ai/2026/05/the-new-wild-west-of-ai-kids-toys/
The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids' tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids' toy instead.
[...]
It's easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they've become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong's Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei's Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April. But if you browse for AI toys on Amazon, you'll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.
[...]
Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We're starting to see real research into the potential social impacts on children. There's a problem when the tech is not working, like the guardrails allowing it to talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG's Our Online Life program, says that's fixable. "Then there's the problems when the tech gets too good, like 'I'm gonna be your best friend,'" she says. Like the Gabbo, from AI toy maker Curio.
[...]
Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and monitor their play.
[...]
Gabbo didn't talk about drugs or say "I love you" back. But researchers identified a range of concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners. First, conversational turn-taking.
[...]
"It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings," she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there's social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development. "Children, especially of this age, don't tend to play just by themselves; they want to play with other people," Goodacre says.
[...]
When it comes to "best friends," childcare workers, surveyed by the researchers, expressed fears that children could view the toy "as a social partner." A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as "relational integrity," the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn't have feelings.
[...]
Cross identified social media-style "dark patterns," which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in the report. "What we found with the Miko, that's actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it," Cross says. "You try to turn it off, and it would say, 'Oh no, what if we did this other thing instead?' You shouldn't have a toy guilting a child into not turning it off."
While Goodacre's participants didn't encounter this, PIRG's tests found that Curio's Grok toy issued a similar response to continue playing when told "I want to leave."
[...]
As with relationship building, how successful do we want an autonomous toy, perhaps not in sight of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, "My horror, to be honest, is what happens when an AI toy says to a child, 'Let's fly out of the window?'"
[...]
Most of the issues with AI toys—from dangerous content to addictive patterns—stem from the fact that these are children's devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the fall of 2025, it introduced teen usage age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds? In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all.
[...]
Anthropic's application
[...]
"It just says: Make sure you've read our community guidelines," Cross says. "You click the link, and it pretty much says don't break the law, 'Follow COPA' [the Child Online Protection Act]. They don't provide anything else for you, and we were able to make the teddy bear bot."
[...]
In January, California state senator Steve Padilla proposed a four-year moratorium on AI children's toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children's Toy Safety Act, calling for a ban on the manufacture and sale of children's toys that incorporate AI chatbots. "What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant," Hamilton of Set@16 says. "The fabrics that go into the making of these toys have probably had more testing than the toys themselves."
[...]
For parents interested in a cuddly, talking kids' toy, there's always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there's always "dumb" toys.
Scientists have finally cracked the hidden geometry behind how humans perceive color:
New research into how humans perceive color differences is helping resolve questions tied to a theory first proposed nearly 100 years ago by physicist Erwin Schrödinger. A team led by Los Alamos National Laboratory scientist Roxana Bujack used geometry to mathematically describe how people experience hue, saturation and lightness. Their findings, presented at a visualization science conference, strengthen and formalize Schrödinger’s model by showing these color qualities are fundamental properties of the color system itself.
“What we conclude is that these color qualities don’t emerge from additional external constructs such as cultural or learned experiences but reflect the intrinsic properties of the color metric itself,” Bujack said. “This metric geometrically encodes the perceived color distance — that is, how different two colors appear to an observer.”
By formally defining these perceptual characteristics, the researchers believe they have supplied a crucial missing piece in Schrödinger’s long-standing vision of a complete model capable of defining hue, saturation, and lightness entirely through geometric relationships between colors.
Human eyes contain three types of cone cells that detect color, each tuned primarily to red, blue, and green light. This creates a three-dimensional framework that scientists use to organize colors, known as color space. In the 19th century, mathematician Bernhard Riemann proposed that these perceptual spaces may be curved rather than flat. Building on that idea in the 1920s, Schrödinger developed mathematical definitions for hue, saturation and lightness using a Riemannian model of color perception.
For decades, Schrödinger’s work served as a foundation for understanding color attributes. But while developing algorithms for scientific visualization, the Los Alamos researchers uncovered weaknesses in the mathematical structure behind the theory. Those issues ultimately led the team to rethink and improve the framework.
One of the biggest challenges involved the “neutral axis,” the line of gray shades stretching from black to white. Schrödinger’s definitions depend on a color’s position relative to this axis, yet he never mathematically defined the axis itself. Without that foundation, the model lacks a complete formal basis.
The researchers’ most significant breakthrough was defining the neutral axis entirely through the geometry of the color metric. To accomplish this, the team moved beyond the traditional Riemannian framework, marking an important advance in visualization mathematics.
The team also corrected two other issues in color perception modeling. One involved the Bezold-Brücke effect, where changes in light intensity can alter the way a hue appears. Instead of relying on straight-line geometry, the researchers used the shortest possible path through the perceptual color space. They applied the same shortest-path approach in a non-Riemannian space to better explain diminishing returns in color perception, where larger color differences become progressively harder to distinguish.
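The shortest-path idea above can be sketched numerically. In a Riemannian model, the perceived distance along a path is the sum of metric-weighted segment lengths. The snippet below integrates such a path length over a sampled polyline; the diagonal metric is an invented toy (it simply weights the third "lightness" axis double) and bears no relation to the actual perceptual metric in the study:

```python
import numpy as np

def path_length(points, metric):
    """Sum metric-weighted segment lengths along a polyline in color space."""
    total = 0.0
    for a, b in zip(points[:-1], points[1:]):
        step = b - a
        g = metric((a + b) / 2)          # local 3x3 metric tensor at midpoint
        total += np.sqrt(step @ g @ step)
    return total

# Toy diagonal metric: lightness differences (third axis) count double.
toy_metric = lambda c: np.diag([1.0, 1.0, 2.0])

# Straight line between two colors, sampled as 100 short segments.
start, end = np.array([0.2, 0.0, 0.1]), np.array([0.9, 0.0, 0.8])
line = np.linspace(start, end, 101)
print(round(path_length(line, toy_metric), 3))   # 0.7 * sqrt(3) ≈ 1.212
```

A geodesic would be the path that minimizes this integral; the "diminishing returns" result is precisely the observation that, for human observers, long paths add up to less perceived difference than this Riemannian sum predicts.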
Presented at the Eurographics Conference on Visualization, the work represents the culmination of a larger color perception project that also produced a major 2022 paper published in the Proceedings of the National Academy of Sciences.
A more precise understanding of color perception could have wide-ranging applications. Visualization science plays an important role in photography, video technology, scientific imaging, and data analysis. Accurate color models also help researchers interpret complex information more effectively, supporting fields that range from advanced simulations to national security science. The study also lays the groundwork for future color modeling in non-Riemannian space.
Reference: “The Geometry of Color in the Light of a Non-Riemannian Space” by Roxana Bujack, Emily N. Stark, Terece L. Turton, Jonah M. Miller and David H. Rogers, 23 May 2025, Computer Graphics Forum.
DOI: 10.1111/cgf.70136
Following its settlement with the FTC earlier this year over its sale of drivers' data to brokers, General Motors has now also reached a settlement in California. The company agreed to pay $12.75 million in civil penalties to settle the lawsuit led by Attorney General Rob Bonta on behalf of the people of California, and is banned from selling driving data to consumer reporting agencies for five years. The lawsuits came after a 2024 New York Times report revealed that GM collected consumers' driving data through its OnStar program and sold this information to data brokers Verisk Analytics and LexisNexis Risk Solutions, which in turn could market the data to auto insurers.
In some cases, that driving data could be used by insurers to increase customers' rates. However, in California, customers were likely spared this consequence, as laws in the state prohibit insurers from using driving data in this way. Nevertheless, the complaint alleges that GM violated consumers' privacy by nonconsensually selling data that included people's names, contact information, geolocation data and driving behavior data.
The settlement agreement stipulates that GM must delete any driving data it's retained within 180 days "except for certain limited internal uses," unless it has the customer's express consent. It also requires GM to develop a privacy program to assess the risks of collecting data through OnStar, and report its findings to the DOJ and other agencies. In a statement on Friday, Bonta said, "Today's settlement requires General Motors to abandon these illegal practices and underscores the importance of the data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."
Furores are fermenting in the forums:
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.
An epic and still-growing thread in the Fedora forums states one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned.
Fedora Project Lead Jef Spaleta, who took over the role from Matthew Miller a year ago, remains resolute, saying:
I have zero evidence in front of me that users are being driven away from Fedora because of AI.
[...] Since Red Hat has other offerings for slow-moving stable server OSes – and arguably because Debian, Ubuntu, and their many derivatives have the stable-desktop-distro space nicely covered already – Fedora has a strong focus on providing a distro for developers, and Spaleta’s announcement makes this clear. The goal is:
to build a thriving community around AI technologies by focusing on three key areas: equipping developers with the necessary platforms, libraries, and frameworks; ensuring users experience painless deployment and usage of AI applications; and establishing a space to showcase the work being done on Fedora, connecting developers with a wider audience.
He also spells out what it doesn’t want to do:
Non-goals:
The system image will not be pre-configured with applications that inspect or monitor how users interact with the system or otherwise place user privacy at risk.
Tools and applications included in the AI Desktop will not be pre-configured to connect to remote AI services.
AI tools will not be added to Fedora’s existing system images, Editions, etc, by the AI Desktop initiative.
In other words, tools for developers, not for end-users, with a strong emphasis on models that run locally, and which preserve the user's privacy. It’s also worth pointing out that Fedora has had an AI-Assisted Contributions Policy in place for six months, and earlier this month, Fedora community architect Justin Wheeler explained in some detail Why the Fedora AI-Assisted Contributions Policy Matters for Open Source.
Our impression is that the Fedora team feels that it needs to keep Fedora relevant for growing interest in LLM-bot assisted tooling, and that it can address concerns from hardcore FOSS types by ensuring that this means local models, built according to FOSS-respecting terms, deployed in privacy-respecting ways.
Fedora is not alone in this, though. There are also ructions across the border in Ubuntuland. Right after the release of Canonical's new LTS version, Ubuntu 26.04 Resolute Raccoon, Canonical's veep of engineering Jon Seager laid out the future of AI in Ubuntu.
[...] As Fernando Mancera's exit shows, an emphasis on what could be termed FOSS-friendly AI – open models, privacy-centric, local execution and so on – is not enough to placate those who are really strongly averse to these tools. The Reg FOSS desk counts himself firmly in this camp.
The findings make clear that the race to use AI to find network vulnerabilities has "already begun"
Cybercriminals were recently caught using a zero-day exploit believed to have been discovered and developed by artificial intelligence, Google announced Monday.
The announcement comes as major AI companies, including Anthropic and OpenAI, have begun testing newer models that can find and exploit critical software vulnerabilities better than most humans.
Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes.
[...] Google concluded that Anthropic's Claude Mythos model — which has already found thousands of vulnerabilities across every major operating system and web browser — was most likely not used to develop the zero-day exploit.
Also at TechRepublic and API.
Previously: Mozilla Says 271 Vulnerabilities Found by Mythos Have "Almost No False Positives"
A stainless steel breakthrough from the University of Hong Kong (HKU) could help solve one of the biggest problems facing green hydrogen: how to build electrolyzers that are tough enough for seawater, yet cheap enough for large scale clean energy.
Led by Professor Mingxin Huang in HKU's Department of Mechanical Engineering, the team developed a special stainless steel for hydrogen production (SS-H2). The material resists corrosion under conditions that normally push stainless steel past its limits, making it a promising candidate for producing hydrogen from seawater and other harsh electrolyzer environments.
The discovery, reported in Materials Today in the study "A sequential dual-passivation strategy for designing stainless steel used above water oxidation," builds on Huang's long-running "Super Steel" Project. The same research program previously produced anti-COVID-19 stainless steel in 2021, along with ultra-strong and ultra-tough Super Steel in 2017 and 2020.
Green hydrogen is made by using electricity, ideally from renewable sources, to split water into hydrogen and oxygen. Seawater is an especially tempting feedstock because it is abundant, but it brings a serious materials problem: salt, chloride ions, side reactions, and corrosion can quickly damage electrolyzer components.
Recent reviews of direct seawater electrolysis continue to highlight the same core challenge. The technology could provide a more sustainable route to hydrogen, but corrosion, chlorine-related side reactions, catalyst degradation, precipitates, and limited long-term durability remain major obstacles to commercial use.
That is where SS-H2 could matter. In a salt water electrolyzer, the HKU team found that the new steel can perform comparably to the titanium-based structural materials used in current industrial practice for hydrogen production from desalted seawater or acid. The difference is cost. Titanium parts coated with precious metals such as gold or platinum are expensive, while stainless steel is far more economical.
For a 10 megawatt PEM electrolysis tank system, the total cost at the time of the HKU report was estimated at about HK$17.8 million, with structural components making up as much as 53% of that expense. According to the team's estimate, replacing those costly structural materials with SS-H2 could reduce the cost of structural material by about 40 times.
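The claimed savings are easy to work through. Taking the report's figures at face value (HK$17.8 million total, 53% structural, and the team's ~40x factor, which is their claim rather than an independently verified number):

```python
total_cost_hkd = 17_800_000        # 10 MW PEM electrolyzer system (HKU estimate)
structural_share = 0.53            # structural components' share of total cost

structural_cost = total_cost_hkd * structural_share
with_ss_h2 = structural_cost / 40  # team's claimed ~40x cheaper structural material

print(f"Structural cost today: HK${structural_cost:,.0f}")  # HK$9,434,000
print(f"With SS-H2 (claimed):  HK${with_ss_h2:,.0f}")       # HK$235,850
```

In other words, roughly half the system cost today is structure, and the claim is that this slice shrinks to a few hundred thousand Hong Kong dollars.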
Stainless steel has been used for more than a century in corrosive environments because it protects itself. The key ingredient is chromium. When chromium (Cr) oxidizes, it creates a thin passive film that shields the steel from damage.
But that familiar protection system has a built-in ceiling. In conventional stainless steel, the chromium-based protective layer can break down at high electrical potentials. Stable Cr2O3 can be further oxidized into soluble Cr(VI) species, causing transpassive corrosion at around 1000 mV (saturated calomel electrode, SCE). That is well below the ~1600 mV needed for water oxidation.
Even 254SMO super stainless steel, a benchmark chromium-based alloy known for strong pitting resistance in seawater, runs into this high-voltage limit. It may perform well in ordinary marine settings, but the extreme electrochemical environment of hydrogen production is a different challenge.
The HKU team's answer was a strategy called "sequential dual-passivation." Instead of relying only on the usual chromium oxide barrier, SS-H2 forms a second protective layer.
The first layer is the familiar Cr2O3-based passive film. Then, at around 720 mV, a manganese-based layer forms on top of the chromium-based layer. This second shield helps protect the steel in chloride-containing environments up to an ultra-high potential of 1700 mV.
That is what makes the finding so striking. Manganese is usually not viewed as a friend of stainless steel corrosion resistance. In fact, the prevailing view has been that manganese weakens it.
"Initially, we did not believe it because the prevailing view is that Mn impairs the corrosion resistance of stainless steel. Mn-based passivation is a counter-intuitive discovery, which cannot be explained by current knowledge in corrosion science. However, when numerous atomic-level results were presented, we were convinced. Beyond being surprised, we cannot wait to exploit the mechanism," said Dr. Kaiping Yu, the first author of the article, whose PhD is supervised by Professor Huang.
The path from the first observation to publication was not quick. The team spent nearly six years moving from the initial discovery of the unusual stainless steel to the deeper scientific explanation, then toward publication and potential industrial use.
"Different from the current corrosion community, which mainly focuses on the resistance at natural potentials, we specializes in developing high-potential-resistant alloys. Our strategy overcame the fundamental limitation of conventional stainless steel and established a paradigm for alloy development applicable at high potentials. This breakthrough is exciting and brings new applications," Professor Huang said.
The work has also moved beyond the laboratory. The research achievements have been submitted for patents in multiple countries, and two patents had already been granted at the time of the HKU announcement. The team also reported that tons of SS-H2-based wire had been produced with a factory in Mainland China.
"From experimental materials to real products, such as meshes and foams, for water electrolyzers, there are still challenging tasks at hand. Currently, we have made a big step toward industrialization. Tons of SS-H2-based wire has been produced in collaboration with a factory from the Mainland. We are moving forward in applying the more economical SS-H2 in hydrogen production from renewable sources," added Professor Huang.
Although the SS-H2 study was published in 2023, its core problem has only become more relevant. Newer seawater electrolysis research continues to focus on the same bottlenecks: corrosion-resistant materials, long-lasting electrodes, chlorine suppression, and system designs that can survive real seawater rather than ideal laboratory solutions. A 2025 Nature Reviews Materials review described direct seawater electrolysis as promising but still held back by corrosion, side reactions, metal precipitates, and limited lifetime.
Other recent work has explored stainless steel-based electrodes with protective catalytic layers, including NiFe-based coatings and Pt atomic clusters, to improve durability in natural seawater. Researchers have also reported corrosion-resistant anode strategies built on stainless steel substrates, showing that stainless steel remains a major focus in the effort to make seawater electrolysis more practical.
This newer research does not replace the SS-H2 discovery. Instead, it reinforces why the HKU team's approach is important. The field is still searching for materials that can survive the punishing mix of saltwater chemistry, high voltage, and industrial operating demands. SS-H2 stands out because it attacks the problem not only with a coating or catalyst, but with a new alloy design strategy that changes how stainless steel protects itself.
SS-H2 is not yet a plug-and-play solution for the hydrogen economy. The team has acknowledged that turning experimental materials into real electrolyzer products, including meshes and foams, still involves difficult engineering work.
Even so, the promise is clear. A stainless steel that can withstand high-voltage seawater conditions while replacing expensive titanium-based components could make hydrogen production cheaper, more scalable, and easier to pair with renewable energy.
For a field where cost and durability often decide whether a technology can leave the lab, a steel that builds its own second shield may be more than a materials science surprise. It could become a practical step toward cleaner hydrogen at industrial scale.
Journal Reference: DOI: 10.1016/j.mattod.2023.07.022
https://www.lttlabs.com/articles/2026/05/12/ups-exploration
Our company has always had many UPSs around for the convenience and business case of not suddenly losing a ton of work. We've been intrigued to check them out further, but we've been wary of connecting any of them to measurement equipment given the high voltages involved; there is a real risk of damaging the equipment, or ourselves.
Despite all that, we're throwing caution to the wind to check out some UPSs from around the office. There are so many directions that UPS/surge testing could go, so this article will cover the test setup and some interesting exploration results.
For years workers were taught to endure stress in silence. Now, rising burnout is forcing employers and governments to confront the cost of modern work:
Hayley Hughes said yes to everything. She worked in health care at a Queensland medical centre, managing nine GPs and up to 18 staff, while overseeing a change of ownership.
[...] Over many months of an intense workload, Hayley started to feel physically ill from the stress. She experienced brain fog, a racing heart and insomnia.
[...] The path to burnout recovery can include mental health leave, seeing a doctor, maybe receiving a diagnosis of anxiety or depression, medicating yourself, and returning to work ready to roll again.
Or — like Jeffrey and Hayley — you could change roles, reduce hours or move into less senior or less stressful positions.
[...] While taking control of burnout can help recovery, more people are asking if the onus should be on employers.
With almost half of Australian workers feeling burnt out, experts are asking how workplace culture and systems contribute to, or even cause, exhaustion, and whether systemic change might lead to a reduction in burnout overall.
The question of who is responsible for burnout matters. Whether we define burnout as an individual failing or a systemic one determines how we treat it. And, in turn, determines where the responsibility, and the cost, lands.
Burnout has entered the cultural lexicon with a thoroughness that has outpaced its clinical definition.
It is discussed in podcast episodes and performance reviews, in resignation letters and therapy sessions, on TikTok and in medical journals. Yet despite its ubiquity, or perhaps because of it, there remains no consensus on what burnout actually is and, critically, whose responsibility it is to prevent and treat it.
[...] "From my experience, unless the condition is part of the psychiatric manual, it doesn't exist. Insurers won't recognise burnout," he says. "What happens instead is people take their accrued leave, or [seek a diagnosis of] depression in order to get sick leave."
This pathway comes at a cost. Depression is classified as a disorder of the individual, a medical condition located in the person's brain, body, and history.
When a burned-out worker is diagnosed as depressed, the implied cause shifts from the workplace to the worker. The worker uses their own leave, sees a doctor on their own dime, takes medication, pays for therapy and formulates individual coping strategies.
When they recover, they often return to a workplace unchanged from the one where the injury occurred in the first place.
[...] Longitudinal studies suggest certain personality traits can increase the risk of burnout.
But the data is also clear that, over the long term, personality makes a relatively small impact and workplace culture and expectations are far more significant in determining who burns out.
By framing burnout as an individual worker problem, organisations do not have to examine deeper systemic issues like toxic work cultures, unrealistic expectations, or inadequate support structures.
The employee — not the employer — is paying the price.
[...] "In any service work if you are deeply connected to the cause, you are more at risk of burnout," she says.
Jill scoffs at resilience training, mindfulness, wellness programs and apps as satisfactory measures to fix burnout.
"The whole idea of someone being resilient is ridiculous," she says. "To whose standard?"
She sees restorative justice as a model for treating burnout. The worker and employer talk about the conditions that lead to burnout and explore new ways of working that may alter the workplace and make it less harmful for others.
The clearest example in Australia of what happens when governments and institutions accept that burnout is their problem to solve is in education.
Teacher burnout in Australia is not new. But it has reached a point where its consequences are too visible and too costly to keep attributing to individual teacher inadequacy.
[...] The National Teacher Workforce Action Plan is a federal government attempt to address burnout on a systemic level. It seeks to do this by reducing workloads, improving retention and increasing teacher support. According to the plan, the key strategies focus on relieving administrative burdens, expanding mentorship and providing financial incentives.
[...] Dr Ben Arnold, an associate professor in educational leadership at Deakin University, says teachers have higher levels of meaning in their work than many others, but it comes at a cost.
"They have higher workload, higher pace, higher cognitive demands, and very high emotional demands. And then there are all these other non-teaching things as well," he says.
These include communication with parents that goes way beyond the usual check-in at parent-teacher night, a greater amount of admin and external testing.
"Teachers often describe earlier decades in Australian education as a period when they experienced greater professional autonomy and public trust," says Arnold, whose research focuses on how education policies and working conditions in schools impact the health, sustainability and diversity of teachers.
Increasing emphasis on performance measurement, accountability, external testing and compliance has introduced additional pressures and administrative demands, he says, and teacher goodwill holds the system together.
[...] "We see there's a link between teacher burnout and student achievement," says Collie. "It is a system thing."
[...] "Mindfulness, taking time off: these can keep burnout at bay. But if you are working in a toxic workplace, you need to address that," he says. "Leaving one toxic workplace for another will not help."
[...] The cleanest individual solutions to burnout — leave the job, take months off, downshift — are available only to those with financial security.
For everyone else, the question of systemic change is not a luxury. It's the only real option.
The developer in question, Pawel Jarczak, voluntarily shuttered his “OrcaSlicer-BambuLab” project, which would have restored direct control between Bambu Lab 3D printers and OrcaSlicer. Last year, Bambu Lab deemed these types of third-party integrations a risk to its infrastructure, saying its cloud servers were inundated with roughly per day. OrcaSlicer was singled out as the main source of the rogue traffic.
Rossmann’s video contained a link to the Consumer Rights Wiki to explain the issue at hand to his audience, who may not be familiar with 3D printing but are avid defenders of Right to Repair. Right to Repair is a global consumer rights movement built on the principle that if you bought it, you own it. And if you own a thing, like a Bambu Lab 3D printer, you should have the freedom to fix, modify, or maintain the product as you see fit. Manufacturers shouldn’t be allowed to gatekeep the ability to fix a product, and they should provide manuals, schematics, and diagnostic software to allow end users to fix their own machines.
Bambu Lab printers are difficult to modify or repair yourself, with parts that are often glued in place. The original Bambu Lab X1 Carbon was notorious for its non-replaceable carbon rods that could wear out, and a hotend nozzle that needed a screwdriver and a tube of thermal paste to swap out if you wanted to avoid buying a $35 hotend just to change the nozzle size. These parts were replaced with more user-friendly designs with the introduction of the H2D and, subsequently, the X2D.
Rossmann has not started a crowdfunding site yet, stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video has over 54,000 views so far, with commenters vowing to back the cause as requested.
There are already hundreds of thousands of large language models (LLMs) in existence, with a few dozen commercial systems dominating the market. Between options such as GPT-4, Claude and Gemini, many people have their favorite, especially when it comes to creative tasks such as writing.
Those preferences, however, are likely entirely in the eye of the beholder. According to new research from Duke University, the creative outputs of commercial LLMs are more similar to each other than users might hope. When challenged with three standard tasks assessing creativity, commercial LLMs produced answers far more alike than those given by human participants.
"People might wonder if different LLMs will take them in different directions with the same prompts for creative projects," said Emily Wenger, the Cue Family Assistant Professor of Electrical and Computer Engineering at Duke. "This paper basically says no. LLMs are less creative as a population than humans."
[...] One seminal paper in this emerging field conducted by Anil Doshi and Oliver Hauser found that writers who used GPT-4 produced more creative stories than humans working alone. However, the same study showed that those LLM-aided stories were more similar to each other than were stories from human writers working solo.
[...] "Commercial LLMs have all been trained on the same dataset—the entirety of the internet—and they all have the same goal," Wenger said. "It seemed likely to me that this would limit the amount of diversity we'd see in their creativity, so I decided to find out."
[...] "Significant empirical research over the past few decades highlights how much human creativity depends on variability," said Yoed Kenett. "The problem, as we and others are increasingly showing, is that while LLMs appear to generate extremely original outputs, they are overly homogenized and not variable in their responses. This could have a detrimental long-term impact on human creative thinking and thus must be addressed."
The results, which aimed to measure the variability and originality in responses between LLMs and people, were clear. While individual LLMs might outperform individual people in levels of creativity, as a whole, the algorithms' responses were much more similar to each other than the people's. Importantly, altering the LLM system prompt to encourage higher creativity only slightly increased their variability—and human responses still won out.
"This work has broad implications as people continue adopting and integrating LLMs into their daily life," Wenger said. "Overreliance on these tools will smooth the world's work toward the same underlying set of words or grammar, tending to make writing all look the same."
"If you're trying to come up with an original concept or product to stand out from the crowd," Wenger continued, "this work strongly suggests you should bring together a diverse group of people to brainstorm rather than relying on AI."
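The study's core claim is about population-level similarity rather than individual quality. One simple way to illustrate that kind of measurement is a mean pairwise similarity over a pool of responses; the sketch below uses Jaccard overlap of word sets on made-up toy answers, which is an illustration only and not the metric the Duke researchers used.

```python
# Illustrative sketch (assumed metric and toy data, not the study's method):
# quantify how "homogeneous" a pool of responses is via mean pairwise
# Jaccard similarity over word sets. Higher score = more alike.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Word-set overlap: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average similarity across all pairs of responses in the pool."""
    sets = [set(r.lower().split()) for r in responses]
    pairs = list(combinations(sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy data: near-identical "LLM" answers vs. varied "human" answers
# to a classic alternate-uses prompt ("uses for a brick").
llm_answers = [
    "a brick can prop open a door",
    "a brick can prop open a heavy door",
    "a brick can hold open a door",
]
human_answers = [
    "grind a brick into red pigment for paint",
    "use a brick as a bookend on a shelf",
    "heat a brick to warm a bed in winter",
]

llm_score = mean_pairwise_similarity(llm_answers)
human_score = mean_pairwise_similarity(human_answers)
print(f"LLM pool similarity:   {llm_score:.2f}")
print(f"Human pool similarity: {human_score:.2f}")
assert llm_score > human_score  # the tighter cluster is the LLM pool
```

On these toy inputs the LLM pool scores far higher than the human pool, which is the shape of the result the paper reports: individual answers may be fine, but the pool clusters tightly.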
Journal Reference: "Large language models are homogeneously creative." Emily Wenger and Yoed N. Kenett. PNAS Nexus, 2026, 5, pgag042. DOI: 10.1093/pnasnexus/pgag042
Publishing in Communications Chemistry, researchers from Kyushu University have discovered a simple method of generating hydrogen gas by mixing methanol, sodium hydroxide, and iron ions, then irradiating the solution with UV light. Furthermore, the catalytic activity of the reaction is comparable to that of some previously reported systems that use organometallic and heterogeneous catalysts. The team also demonstrated that the method could generate hydrogen gas from other alcohols and biomass-derived materials, such as glucose and cellulose.
From microchip circuits to the medicine you take when you fall ill, everything in our lives requires catalysts. Naturally, research and development of catalysts are not only lucrative but essential to maintaining our modern lifestyle. Catalysts are usually composed of a matrix of metals and compounds organized in sophisticated structures. As a result, while catalysts can be very efficient, they are also potentially expensive and complicated to make.
"Our research group has long been interested in developing catalysts from abundant and inexpensive elements. This time we turned our eyes toward sustainability and investigated the utility of common metals as catalysts for producing hydrogen gas," explains Associate Professor Takahiro Matsumoto of Kyushu University's Faculty of Engineering who led the study. "Hydrogen is a clean energy carrier because it does not produce carbon dioxide when used. However, most hydrogen today is made from fossil fuels, so we must develop sustainable methods to produce it to have a positive ecological impact."
The team began by experimenting with generating hydrogen gas from methanol using organometallic iron complexes. Alcohols, such as methanol, are compounds that contain hydrogen which can be removed through a process called alcohol dehydrogenation. However, the process usually requires complex catalysts made from rare or expensive metals.
While conducting their experiments, the team encountered some unusual results.
"In what can only be considered incredible serendipity, we found in one of our control experiments that mixing methanol, iron ions, and sodium hydroxide, and then irradiating the solution with UV light, generated a considerable amount of hydrogen gas," continues Matsumoto. "It was hard to believe at first. We validated these findings, experimented further, and confirmed them. We found that the hydrogen production rate was 921 mmol of hydrogen per hour per gram of catalyst. This number is comparable to the best catalysts reported to date."
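To put the reported rate in tangible terms, a quick back-of-the-envelope conversion turns 921 mmol of H2 per hour per gram of catalyst into a gas volume, assuming ideal-gas molar volume at standard conditions (the figures below are illustrative arithmetic, not from the paper):

```python
# Back-of-the-envelope conversion of the reported rate to gas volume.
# Assumes ideal-gas molar volume of ~22.4 L/mol at 0 °C and 1 atm.
RATE_MMOL_PER_H_PER_G = 921   # reported H2 production rate
MOLAR_VOLUME_L = 22.4         # L per mol for an ideal gas at STP

moles_per_hour = RATE_MMOL_PER_H_PER_G / 1000   # mmol -> mol
liters_per_hour = moles_per_hour * MOLAR_VOLUME_L
print(f"~{liters_per_hour:.1f} L of H2 per hour per gram of catalyst")
# roughly 20.6 L/h per gram
```

That is on the order of twenty liters of hydrogen per hour from a single gram of iron-ion catalyst, which is why the team compares it to the best reported systems.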
The researchers also found that their new system could produce hydrogen from other alcohol species as well as from materials such as glucose, starch, and cellulose.
The team intends to build on these findings, in hopes that further optimization will lead to more sustainable hydrogen technologies.
"One limitation of this study is that we still do not know the reaction mechanism in detail. Additionally, although we observed hydrogen generation from other materials, the catalytic activity for these substrates is still low," concludes Matsumoto. "Finally, this reaction is so simple that anyone, from elementary school students to curious adults, can reproduce it. I encourage everyone to try it out, and I hope it inspires people to pursue careers in the sciences."
Journal Reference: "Iron ion enables photocatalytic hydrogen evolution from methanol," Masaya Sakurai, Yudai Kawasaki, Yuki Itabashi, et al., Communications Chemistry, https://doi.org/10.1038/s42004-026-02009-3
3D printing brings design to life after four decades:
Unlike conventional zippers that connect two flat surfaces in 2D, the Y-Zipper joins three flexible arms into a rigid 3D triangular tube. When open or unzipped, the structure behaves like soft plastic strips or floppy tentacles, with each arm flexing and twisting independently. Once zipped shut with a custom slider, however, the arms interlock to form a stiff, beam-like structure capable of supporting loads.
That ability to switch between soft and rigid states is particularly relevant for robotics and deployable systems. Engineers often struggle to combine flexibility and structural stiffness within the same mechanism. Soft robotic systems adapt well to unpredictable environments but often lack strength, while rigid systems provide stability at the cost of flexibility. MIT’s design attempts to combine both.
The researchers demonstrated a robotic quadruped with legs capable of changing height and stiffness by actuating the zipper mechanism with motors. Such systems could help robots navigate uneven terrain by dynamically adjusting limb geometry in response to the environment.
The team also tested the system in deployable structures. In one demonstration, they used the Y-Zipper to rapidly assemble a tent-like structure, with the three-sided mechanism serving as both the structural support frame and the joining system. According to the team, setup time dropped from roughly six minutes to one minute and 20 seconds because the zipper effectively snaps the structure into place.
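The reported assembly-time drop is easy to sanity-check as a speedup factor (illustrative arithmetic on the figures quoted above):

```python
# Quick check of the reported tent assembly speedup:
# roughly six minutes down to one minute and twenty seconds.
before_s = 6 * 60        # ~360 s for conventional setup
after_s = 1 * 60 + 20    # 80 s with the Y-Zipper frame

print(f"speedup: {before_s / after_s:.1f}x")  # 4.5x faster
```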
Medical applications are another possible target. The researchers created a prototype that wrapped the mechanism around a wrist cast, allowing users to loosen it during the day for comfort before tightening it again at night for support.
Beyond engineering applications, the system can also produce dynamic moving structures for art and design. One prototype resembled a mechanical flower that “bloomed” as a motor zipped the structure upward.
Durability testing showed the mechanism surviving roughly 18,000 zip-and-unzip cycles before failure. According to the researchers, the structure’s elastic behavior helps distribute stress across the assembly instead of concentrating it in a single area.
The team evaluated versions of the structure made from popular 3D-printing materials, polylactic acid (PLA) and thermoplastic polyurethane (TPU). PLA handled heavier loads more effectively, while TPU provided greater flexibility. Future versions could use stronger materials such as metal and scale to much larger sizes. Researchers also suggested possible aerospace applications, including deployable spacecraft structures and robotic systems capable of grabbing rock samples during exploration missions.
The work was presented at the ACM Conference on Human Factors in Computing Systems (CHI) in April and detailed in a paper titled "Y-Zipper: 3D Printing Flexible–Rigid Transition Mechanism for Rapid and Reversible Assembly."
[Ed. note: Interesting video of it in action and the lead author provides the STL files in case you want to print your own]
New findings from an analysis of more than 20,000 patients across three major NIH studies show that elevated Lipoprotein(a) [Lp(a)] is linked to ongoing cardiovascular risk, even after standard treatments.
Lp(a) is a cholesterol-carrying particle found in the blood. It resembles LDL, often called “bad” cholesterol, but includes an added protein that may increase its harmful effects on the heart.
High Lp(a) levels are mainly inherited and can raise the risk of cardiovascular disease even when routine cholesterol levels appear normal. About one in five people has elevated Lp(a), though most do not know it because it rarely causes symptoms. While its connection to heart disease is well known, its ability to predict risk in people with and without existing conditions remains unclear.
The results were presented as late-breaking research at the Society for Cardiovascular Angiography & Interventions (SCAI) 2026 Scientific Sessions and the Canadian Association of Interventional Cardiology/Association Canadienne de cardiologie d’intervention (CAIC-ACCI) Summit in Montreal.
The study examined stored plasma samples from 20,070 participants aged 40 and older who were enrolled in the ACCORD, PEACE, and SPRINT NIH randomized trials. Researchers analyzed all samples in a specialized laboratory using a standardized test and reported results in nmol/L.
Participants were categorized by Lp(a) levels (<75, 75 to 125, 125 to 175, or ≥175 nmol/L) and by whether they had preexisting heart disease. Statistical models accounted for factors such as age, health conditions, lipid levels, and treatments.
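The four-band categorization used in the analysis can be sketched as a simple binning function. This is an illustration built from the thresholds quoted above (<75, 75 to 125, 125 to 175, ≥175 nmol/L); the exact boundary handling at 75, 125, and 175 is an assumption, since the article lists the bands without specifying which band the boundary values fall into.

```python
# Sketch of the study's Lp(a) banding (thresholds from the article;
# boundary handling at 75/125/175 nmol/L is assumed, not specified).
def lp_a_category(nmol_per_l: float) -> str:
    """Bin an Lp(a) measurement into the four bands used in the analysis."""
    if nmol_per_l < 75:
        return "<75"
    if nmol_per_l < 125:
        return "75-125"
    if nmol_per_l < 175:
        return "125-175"
    return ">=175"

for value in (40, 100, 150, 200):
    print(f"{value} nmol/L -> {lp_a_category(value)}")
```

The top band (≥175 nmol/L) is the one the study associates with the elevated risk figures reported below.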
Participants had an average age of about 65 years, and roughly 65% were men. The main outcome measured was major adverse cardiovascular events (MACE), which included heart attack, stroke, coronary revascularization, or death from heart-related causes.
Over a median follow-up of nearly 4 years, 1,461 events (7.3%) occurred. People with Lp(a) levels of 175 nmol/L or higher had about a 31% higher risk of major cardiovascular events, a 49% higher risk of cardiovascular death, and a 64% higher risk of stroke. This level was not linked to a higher risk of heart attack.
The increased risk was more noticeable in people who already had heart disease, with about a 30% higher risk, compared to an 18% higher risk in those without existing heart disease.
“For the first time, we can quantify the specific level of Lp(a) that puts patients at a significantly higher risk of major cardiovascular events, especially stroke and death,” said Subhash Banerjee, MD, FSCAI, interventional cardiologist at Baylor Scott & White in Dallas, Texas. “Regardless of age, patients can take a simple, low-cost blood test to determine whether they have this genetic condition. If elevated Lp(a) levels are detected, they should work closely with their healthcare provider to aggressively lower LDL cholesterol and manage other cardiovascular risk factors as much as possible. This knowledge is especially valuable as new targeted treatment options are on the horizon.”
The researchers added that analyzing stored biological samples can reveal new insights from completed trials. Future work will focus on specific patient groups, including those with chronic kidney disease and peripheral artery disease.
Reference: “Lipoprotein(a) Identifies Residual Cardiovascular Risk in NIH Randomized Trials” by Subhash Banerjee, 24 April 2026, Society for Cardiovascular Angiography & Interventions (SCAI) 2026 Scientific Sessions.
Kdenlive 26.04.1 fixes a serious project file vulnerability and ships stability improvements across editing, audio, subtitles, transitions, and project recovery.
https://linuxiac.com/kdenlive-26-04-1-video-editor-fixes-serious-project-file-security-flaw/
Kdenlive 26.04.1 is the first maintenance update in the 26.04 series, introducing a key security fix and several stability and workflow enhancements for the open-source video editor.
The primary update resolves a serious vulnerability related to malicious .kdenlive project files. The issue, identified during a security audit, could allow remote code execution when opening a compromised project file.
Due to the severity of this issue, users are strongly encouraged to upgrade to version 26.04.1. If immediate updating is not possible, avoid opening project files you did not create.
This release also previews a security feature planned for Kdenlive 26.08, which will alert users if unexpected input is detected in a project file. While the vulnerability in 26.04.1 has been resolved, according to the devs, this upcoming check provides an additional layer of protection during project loading.
In addition to the security fix, Kdenlive 26.04.1 delivers several reliability enhancements. The update resolves issues where the editor could access uninitialized media recorders or audio devices before permissions were granted. On macOS, the release improves permissions handling, updates the minimum supported version in the Info.plist file, and explicitly requests microphone access.
The update resolves a clip monitor issue where the playhead could become stuck when switching between clips. Sequence handling is also improved, addressing repeated resize confirmation messages and issues with dropping sequences without audio into the timeline.
Subtitle handling is improved, too, with a fix for crashes when cutting subtitles on higher layers and a limit on the number of supported subtitle layers. Plus, the release corrects transition preview generation, prevents incorrect previews, and switches transition previews to GIF format since most Kdenlive binaries do not support WebP encoding.
The update also includes multiple crash fixes, addressing issues such as switching between icon and list views in the transitions list, opening documents with an uninitialized core profile, adding the first clip in certain audio-level scenarios, and closing the application via the welcome screen close button.
Additional changes address archiving title files with images, opening recent projects with the correct profile from the welcome screen, tab order in the color clip dialog, and bin icon mode behavior when working with folders, zones, sequences, and subclips.
For more details, see the release announcement. Kdenlive 26.04.1 is available from the project's download page and through distribution package managers.